NVIDIA CEO Jensen Huang discusses the transformative potential of AI in 2025, highlighting advancements in reasoning, robotics, and productivity across industries while refuting doomsday narratives and emphasizing the importance of open source and nuanced technological development.
Cal Newport and Ed Zitron dissect the tumultuous year of AI in 2025, revealing a narrative of technological hype, financial unsustainability, and diminishing returns, ultimately concluding that it was a terrible year for artificial intelligence.
Yoshua Bengio, a pioneer of AI, warns about the potential catastrophic risks of artificial intelligence, advocating for responsible development, technical safeguards, and global cooperation to mitigate existential threats before it's too late.
A deep dive into AI's potential transformative impact, exploring whether it's just another platform shift or something closer to electricity, examining technological bottlenecks, industry implications, and the uncertain path to realizing AI's full potential.
In a candid conversation with cardiologist Eric Topol, Adam Grant explores cutting-edge insights on longevity, debunking health myths, preventing major diseases, and the potential of AI in transforming medical care.
Aidan Gomez, co-founder and CEO of Cohere, discusses the transformative potential of AI in enterprise, reflecting on his journey from Google Brain researcher to building an AI platform focused on deploying large language models across critical industries.
Dr. Fei-Fei Li discusses her groundbreaking work in AI, from creating ImageNet to launching Marble, a world-modeling platform that generates interactive 3D worlds, while emphasizing the importance of human-centered AI and individual responsibility in shaping technology's future.
Dr. Roman Yampolskiy, a computer science professor and AI safety expert, warns that artificial general intelligence (AGI) could arrive by 2027, potentially leading to 99% unemployment and posing an existential threat to humanity. He argues that we cannot control superintelligent AI and that its development could result in human extinction, while also discussing his belief that we are likely living in a simulation created by a more advanced intelligence.
Nick Frosst, co-founder of Cohere, discusses the evolution of AI, critiquing Sam Altman's AGI predictions and emphasizing the importance of enterprise-focused language models. He shares insights on AI's potential to transform work, the challenges of technological hype, and Cohere's mission to build a generational company focused on solving real-world problems.